
    Tensor-Based Algorithms for Image Classification

    Interest in machine learning with tensor networks has been growing rapidly in recent years. We show that tensor-based methods developed for learning the governing equations of dynamical systems from data can likewise be used for supervised learning problems, and we propose two novel approaches for image classification. One is a kernel-based reformulation of the previously introduced multidimensional approximation of nonlinear dynamics (MANDy); the other is an alternating ridge regression in the tensor-train format. We apply both methods to the MNIST and fashion MNIST data sets and show that the approaches are competitive with state-of-the-art neural network-based classifiers.
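
    For orientation, the kernel-based approach has a dense (non-tensor) analogue: kernel ridge regression with one-hot class labels. The sketch below is a minimal NumPy illustration under that reading; the Gaussian kernel, its width, and the random stand-in data are assumptions, not the paper's setup.

        import numpy as np

        def gaussian_kernel(A, B, sigma=1.0):
            # Pairwise squared distances, then the Gaussian kernel matrix.
            d2 = (A**2).sum(1)[:, None] - 2 * A @ B.T + (B**2).sum(1)[None, :]
            return np.exp(-d2 / (2 * sigma**2))

        def fit_kernel_ridge(X, y, n_classes, lam=1e-3, sigma=1.0):
            # One-hot labels Y and the ridge solution alpha = (K + lam I)^{-1} Y.
            Y = np.eye(n_classes)[y]
            K = gaussian_kernel(X, X, sigma)
            return np.linalg.solve(K + lam * np.eye(len(X)), Y)

        def predict(X_train, alpha, X_test, sigma=1.0):
            # Class scores are kernel evaluations against the training set.
            return (gaussian_kernel(X_test, X_train, sigma) @ alpha).argmax(axis=1)

        # Toy usage with random data standing in for (fashion) MNIST features.
        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(200, 64)), rng.integers(0, 10, size=200)
        alpha = fit_kernel_ridge(X, y, n_classes=10)
        labels = predict(X, alpha, X[:5])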

    Tensor-based dynamic mode decomposition

    Dynamic mode decomposition (DMD) is a recently developed tool for the analysis of the behavior of complex dynamical systems. In this paper, we propose an extension of DMD that exploits low-rank tensor decompositions of potentially high-dimensional data sets to compute the corresponding DMD modes and eigenvalues. The goal is to reduce the computational complexity and the amount of memory required to store the data in order to mitigate the curse of dimensionality. The efficiency of these tensor-based methods is illustrated with the aid of several fluid dynamics problems such as the von Kármán vortex street and the simulation of two merging vortices.
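
    For reference, the classical matrix-based (exact) DMD that such tensor methods lift into the tensor-train format fits in a few lines. The sketch below is a minimal NumPy version on synthetic snapshot data, not the tensor-based variant proposed in the paper.

        import numpy as np

        def dmd(X, Y, rank):
            # X, Y: snapshot matrices with Y[:, k] the successor of X[:, k].
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
            # Reduced operator A_tilde = U* Y V S^{-1} (division broadcasts over columns).
            A_tilde = U.conj().T @ Y @ Vt.conj().T / s
            eigvals, W = np.linalg.eig(A_tilde)
            modes = Y @ Vt.conj().T / s @ W   # exact DMD modes
            return eigvals, modes

        # Synthetic data: DMD recovers the spectrum of the underlying linear map.
        rng = np.random.default_rng(1)
        A = 0.9 * np.eye(10) + 0.05 * rng.normal(size=(10, 10))
        X = rng.normal(size=(10, 50))
        eigvals, modes = dmd(X, A @ X, rank=10)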

    Nearest-Neighbor Interaction Systems in the Tensor-Train Format

    Low-rank tensor approximation approaches have become an important tool in the scientific computing community. The aim is to enable the simulation and analysis of high-dimensional problems that can no longer be solved by conventional methods due to the so-called curse of dimensionality. This requires techniques to handle linear operators defined on extremely large state spaces and to solve the resulting systems of linear equations or eigenvalue problems. In this paper, we present a systematic tensor-train decomposition for nearest-neighbor interaction systems that is applicable to a host of different problems. With the aid of this decomposition, the memory consumption as well as the computational costs can be reduced significantly. Furthermore, it can be shown that in some cases the rank of the tensor decomposition does not depend on the network size; the format is thus feasible even for high-dimensional systems. We illustrate the results with several guiding examples such as the Ising model, a system of coupled oscillators, and a CO oxidation model.
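
    The rank statement can be made concrete with the textbook nearest-neighbor example: an Ising-type Hamiltonian admits an exact tensor-train operator representation of rank 3 regardless of the chain length. The NumPy sketch below illustrates that structure; it is a standard construction with arbitrary coupling values, not the general SLIM decomposition of the paper.

        import numpy as np

        I2 = np.eye(2)
        X = np.array([[0., 1.], [1., 0.]])
        Z = np.array([[1., 0.], [0., -1.]])

        def ising_tt(n, J=1.0, h=0.5):
            # Rank-3 TT operator cores for H = -J sum Z_i Z_{i+1} - h sum X_i.
            W = np.zeros((3, 2, 2, 3))
            W[0, :, :, 0] = I2        # pass the identity to the right
            W[0, :, :, 1] = Z         # open a Z-Z interaction
            W[0, :, :, 2] = -h * X    # on-site field term
            W[1, :, :, 2] = -J * Z    # close the Z-Z interaction
            W[2, :, :, 2] = I2        # carry completed terms along
            return [W[:1]] + [W] * (n - 2) + [W[:, :, :, 2:]]

        def tt_to_dense(cores):
            # Contract the chain back into a full 2^n x 2^n matrix (small n only).
            T = cores[0]
            for W in cores[1:]:
                T = np.einsum('aijb,bklc->aikjlc', T, W)
                a, i, k, j, l, c = T.shape
                T = T.reshape(a, i * k, j * l, c)
            return T[0, :, :, 0]

        def kron_chain(ops):
            out = np.array([[1.0]])
            for o in ops:
                out = np.kron(out, o)
            return out

        # Cross-check against a dense construction for a short chain.
        n = 5
        H = tt_to_dense(ising_tt(n))
        H_ref = -sum(kron_chain([I2] * i + [Z, Z] + [I2] * (n - i - 2)) for i in range(n - 1)) \
                - 0.5 * sum(kron_chain([I2] * i + [X] + [I2] * (n - i - 1)) for i in range(n))
        assert np.allclose(H, H_ref)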

    Multidimensional approximation of nonlinear dynamical systems

    A key task in the field of modeling and analyzing nonlinear dynamical systems is the recovery of unknown governing equations from measurement data alone. There is a wide range of application areas for this important instance of system identification, ranging from industrial engineering and acoustic signal processing to stock market models. In order to find appropriate representations of underlying dynamical systems, various data-driven methods have been proposed by different communities. However, if the given data sets are high-dimensional, these methods typically suffer from the curse of dimensionality. To significantly reduce the computational costs and storage consumption, we propose the method multidimensional approximation of nonlinear dynamical systems (MANDy), which combines data-driven methods with tensor network decompositions. The efficiency of the introduced approach is illustrated with the aid of several high-dimensional nonlinear dynamical systems.
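
    As context for what MANDy tensorizes: the underlying dense identification problem picks a dictionary of candidate functions Theta and solves Theta(X) Xi ≈ dX/dt in the least-squares sense; the paper's contribution is to replace the explicit matrix Theta by a tensor network. Below is a minimal dense sketch with a small monomial dictionary and synthetic data, both illustrative assumptions.

        import numpy as np

        def dictionary(X):
            # Candidate functions: 1, x_i, and the quadratic monomials x_i x_j.
            ones = np.ones((X.shape[0], 1))
            quad = np.stack([X[:, i] * X[:, j]
                             for i in range(X.shape[1])
                             for j in range(i, X.shape[1])], axis=1)
            return np.hstack([ones, X, quad])

        def identify(X, dXdt):
            # Least-squares recovery of the coefficient matrix Xi.
            Xi, *_ = np.linalg.lstsq(dictionary(X), dXdt, rcond=None)
            return Xi

        # Synthetic example: recover dx/dt = -y, dy/dt = x from samples.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 2))
        dXdt = np.column_stack([-X[:, 1], X[:, 0]])
        Xi = identify(X, dXdt)   # nonzero rows mark the active dictionary terms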

    Existence and Uniqueness of Solutions of the Koopman--von Neumann Equation on Bounded Domains

    The Koopman--von Neumann equation describes the evolution of a complex-valued wavefunction corresponding to the probability distribution given by an associated classical Liouville equation. Typically, it is defined on the whole Euclidean space. The investigation of bounded domains, particularly in practical scenarios involving quantum-based simulations of dynamical systems, has received little attention so far. We consider the Koopman--von Neumann equation associated with an ordinary differential equation on a bounded domain whose trajectories are contained in the set's closure. Our main results are the construction of a strongly continuous semigroup together with the existence and uniqueness of solutions of the associated initial value problem. To this end, a functional-analytic framework connected to Sobolev spaces is proposed and analyzed. Moreover, the connection of the Koopman--von Neumann framework to transport equations is highlighted.
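
    For readers new to the setting, the equation in question has the following standard form (a textbook relation, not a result specific to the paper): for an ODE $\dot{x} = v(x)$, the wavefunction evolves according to

        \partial_t \psi = - v \cdot \nabla \psi - \tfrac{1}{2} (\nabla \cdot v)\, \psi,

    and a one-line computation confirms that $\rho = |\psi|^2$ then satisfies the associated Liouville (transport) equation:

        \partial_t |\psi|^2 = \bar{\psi}\, \partial_t \psi + \psi\, \partial_t \bar{\psi}
                            = - v \cdot \nabla |\psi|^2 - (\nabla \cdot v)\, |\psi|^2
                            = - \nabla \cdot \bigl( v\, |\psi|^2 \bigr).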

    Tensor-based computation of metastable and coherent sets

    Recent years have seen rapid advances in the data-driven analysis of dynamical systems based on Koopman operator theory -- with extended dynamic mode decomposition (EDMD) being a cornerstone of the field. On the other hand, low-rank tensor product approximations -- in particular the tensor train (TT) format -- have become a valuable tool for the solution of large-scale problems in a number of fields. In this work, we combine EDMD and the TT format, enabling the application of EDMD to high-dimensional problems in conjunction with a large set of features. We derive efficient algorithms to solve the EDMD eigenvalue problem based on tensor representations of the data, and to project the data into a low-dimensional representation defined by the eigenvectors. We extend this method to perform canonical correlation analysis (CCA) of non-reversible or time-dependent systems. We prove that there is a physical interpretation of the procedure and demonstrate its capabilities by applying the method to several benchmark data sets.
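
    The matrix version of EDMD that the paper reformulates in the tensor-train format is short enough to state directly: lift snapshot pairs with a feature map, estimate the Koopman matrix from a least-squares problem, and read approximate eigenvalues and eigenfunctions off that matrix. Below is a minimal NumPy sketch; the monomial feature map and the synthetic linear dynamics are illustrative assumptions, not the benchmark systems of the paper.

        import numpy as np

        def features(X):
            # Feature map psi: monomials up to degree two in two variables.
            x, y = X[:, 0], X[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

        def edmd(X, Y):
            # Koopman matrix K = pinv(Psi_X) Psi_Y; its eigenpairs approximate
            # Koopman eigenvalues and eigenfunction coefficients.
            Psi_X, Psi_Y = features(X), features(Y)
            K = np.linalg.pinv(Psi_X) @ Psi_Y
            eigvals, eigvecs = np.linalg.eig(K)
            return eigvals, eigvecs, Psi_X

        # Snapshot pairs from a simple linear map standing in for trajectory data.
        rng = np.random.default_rng(3)
        A = np.array([[0.9, -0.1], [0.0, 0.8]])
        X = rng.normal(size=(1000, 2))
        eigvals, eigvecs, Psi_X = edmd(X, X @ A.T)
        eigfun_values = Psi_X @ eigvecs   # eigenfunctions evaluated on the data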

    Feature space approximation for kernel-based supervised learning

    We propose a method for the approximation of high- or even infinite-dimensional feature vectors, which play an important role in supervised learning. The goal is to reduce the size of the training data, resulting in lower storage consumption and computational complexity. Furthermore, the method can be regarded as a regularization technique, which improves the generalizability of learned target functions. We demonstrate significant improvements in comparison to the computation of data-driven predictions involving the full training data set. The method is applied to classification and regression problems from different application areas such as image recognition, system identification, and oceanographic time series analysis.
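
    A widely used baseline with the same goal of shrinking kernel computations is the Nyström approximation, which rebuilds the full kernel matrix from a subset of landmark points. The sketch below shows that standard technique for comparison; it is not the feature-space algorithm proposed in the paper, and the Gaussian kernel and landmark count are arbitrary choices.

        import numpy as np

        def gaussian_kernel(A, B, sigma=1.0):
            d2 = (A**2).sum(1)[:, None] - 2 * A @ B.T + (B**2).sum(1)[None, :]
            return np.exp(-d2 / (2 * sigma**2))

        def nystroem(X, m, sigma=1.0, rng=None):
            # Approximate K by K_nm pinv(K_mm) K_mn using m landmark rows.
            if rng is None:
                rng = np.random.default_rng()
            idx = rng.choice(len(X), size=m, replace=False)
            K_nm = gaussian_kernel(X, X[idx], sigma)
            K_mm = gaussian_kernel(X[idx], X[idx], sigma)
            return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

        X = np.random.default_rng(4).normal(size=(500, 8))
        K_full = gaussian_kernel(X, X)
        K_apx = nystroem(X, m=50)
        rel_err = np.linalg.norm(K_full - K_apx) / np.linalg.norm(K_full)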

    Improved local models and new Bell inequalities via Frank-Wolfe algorithms

    In Bell scenarios with two outcomes per party, we algorithmically consider the two sides of the membership problem for the local polytope: constructing local models and deriving separating hyperplanes, that is, Bell inequalities. We take advantage of the recent developments in so-called Frank-Wolfe algorithms to significantly increase the convergence rate of existing methods. As an application, we study the threshold value for the nonlocality of two-qubit Werner states under projective measurements. Here, we improve on both the upper and lower bounds present in the literature. Importantly, our bounds are entirely analytical; moreover, they yield refined bounds on the value of the Grothendieck constant of order three: $1.4367 \leqslant K_G(3) \leqslant 1.4546$. We also demonstrate the efficiency of our approach in multipartite Bell scenarios, and present the first local models for all projective measurements with visibilities noticeably higher than the entanglement threshold. We make our entire code accessible as a Julia library called BellPolytopes.jl.
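
    To make the algorithmic ingredient concrete: a Frank-Wolfe method minimizes a smooth function over a polytope using only a linear minimization oracle over its vertices; for the local polytope those vertices are deterministic strategies. The sketch below shows the vanilla iteration in Python (the authors' actual implementation is the Julia library BellPolytopes.jl); the explicit random vertex list stands in for the oracle.

        import numpy as np

        def frank_wolfe(p, vertices, n_iters=500):
            # Minimize ||x - p||^2 over conv(vertices) with the vanilla FW step.
            x = vertices[0].astype(float)
            for t in range(n_iters):
                grad = 2 * (x - p)
                s = vertices[np.argmin(vertices @ grad)]  # linear minimization oracle
                gamma = 2.0 / (t + 2)                     # standard open-loop step size
                x = (1 - gamma) * x + gamma * s
            return x  # small ||x - p|| indicates that p is (close to) a member

        # Toy polytope: convex hull of random sign vectors standing in for strategies.
        rng = np.random.default_rng(5)
        vertices = rng.choice([-1.0, 1.0], size=(64, 16))
        p = 0.2 * rng.normal(size=16)
        x_star = frank_wolfe(p, vertices)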

    Modeling and Analysis of Chemical Reaction Networks, Catalytic Processes, Fluid Flows, and Brownian Motion

    Contents:
    1. Introduction
    Part I: Foundations of Tensor Approximation
    2. Tensors in Full Format
    2.1. Definition and Notation
    2.2. Tensor Calculus
    2.2.1. Addition and Scalar Multiplication
    2.2.2. Index Contraction
    2.2.3. Tensor Multiplication
    2.2.4. Tensor Product
    2.3. Graphical Representation
    2.4. Matricization and Vectorization
    2.5. Norms
    2.6. Orthonormality
    3. Tensor Decomposition
    3.1. Rank-One Tensors
    3.2. Canonical Format
    3.3. Tucker and Hierarchical Tucker Format
    3.4. Tensor-Train Format
    3.4.1. Core Notation
    3.4.2. Addition and Multiplication
    3.4.3. Orthonormalization
    3.4.4. Calculating Norms
    3.4.5. Conversion
    3.5. Modified Tensor-Train Formats
    3.5.1. Quantized Tensor-Train Format
    3.5.2. Block Tensor-Train Format
    3.5.3. Cyclic Tensor-Train Format
    4. Optimization Problems in the Tensor-Train Format
    4.1. Overview
    4.2. (M)ALS for Systems of Linear Equations
    4.2.1. Problem Statement
    4.2.2. Retraction Operators
    4.2.3. Computational Scheme
    4.2.4. Algorithmic Aspects
    4.3. (M)ALS for Eigenvalue Problems
    4.3.1. Problem Statement
    4.3.2. Computational Scheme
    4.4. Properties of (M)ALS
    4.5. Methods for Solving Initial Value Problems
    Part II: Progress in Tensor-Train Decompositions
    5. Tensor Representation of Markovian Master Equations
    5.1. Markov Jump Processes
    5.2. Tensor-Based Representation of Infinitesimal Generators
    6. Nearest-Neighbor Interaction Systems in the Tensor-Train Format
    6.1. Nearest-Neighbor Interaction Systems
    6.2. General SLIM Decomposition
    6.3. SLIM Decomposition for Markov Generators
    7. Dynamic Mode Decomposition in the Tensor-Train Format
    7.1. Moore-Penrose Inverse
    7.2. Computation of the Pseudoinverse
    7.3. Tensor-Based Dynamic Mode Decomposition
    8. Tensor-Train Approximation of the Perron–Frobenius Operator
    8.1. Perron–Frobenius Operator
    8.2. Ulam's Method
    Part III: Applications of the Tensor-Train Format
    9. Chemical Reaction Networks
    9.1. Elementary Reactions
    9.2. Chemical Master Equation
    9.3. Numerical Experiments
    9.3.1. Signaling Cascade
    9.3.2. Two-Step Destruction
    10. Heterogeneous Catalysis
    10.1. Heterogeneous Catalytic Processes
    10.2. Reduced Model for the CO Oxidation at RuO2
    10.3. Numerical Experiments
    10.3.1. Scaling with System Size
    10.3.2. Varying the CO Pressure
    10.3.3. Increasing the Oxygen Desorption Rate
    11. Fluid Dynamics
    11.1. Computational Fluid Dynamics
    11.2. Numerical Examples
    11.2.1. Rotating Annulus
    11.2.2. Flow Around a Blunt Body
    12. Brownian Dynamics
    12.1. Langevin Equation
    12.2. Numerical Experiments
    12.2.1. Two-Dimensional Triple-Well Potential
    12.2.2. Three-Dimensional Quadruple-Well Potential
    13. Summary and Conclusion
    14. References
    A. Appendix
    A.1. Proofs
    A.1.1. Inverse Function for Little-Endian Convention
    A.1.2. Equivalence of the Master Equation Formulations
    A.1.3. Equivalence of SLIM Decomposition and Canonical Representation
    A.1.4. Equivalence of SLIM Decomposition and Canonical Representation for Markovian Master Equations
    A.1.5. Functional Correctness of Pseudoinverse Algorithm
    A.2. Algorithms
    A.2.1. Orthonormalization of Tensor Trains
    A.2.2. ALS for Systems of Linear Equations
    A.2.3. MALS for Systems of Linear Equations
    A.2.4. ALS for Eigenvalue Problems
    A.2.5. MALS for Eigenvalue Problems
    A.2.6. Compression of Two-Dimensional TT Operators
    A.2.7. Construction of SLIM Decompositions for Markovian Master Equations
    A.3. Deutsche Zusammenfassung (German Summary)
    A.4. Eidesstattliche Erklärung (Declaration)

    The simulation and analysis of high-dimensional problems is often infeasible due to the curse of dimensionality.
    In this thesis, we investigate the potential of tensor decompositions for mitigating this curse when considering systems from several application areas. Using tensor-based solvers, we directly compute numerical solutions of master equations associated with Markov processes on extremely large state spaces. Furthermore, we exploit the tensor-train format to approximate eigenvalues and corresponding eigentensors of linear tensor operators. In order to analyze the dominant dynamics of high-dimensional stochastic processes, we propose several decomposition techniques for highly diverse problems. These include tensor representations for operators based on nearest-neighbor interactions, construction of pseudoinverses for tensor-based reformulations of dimensionality reduction methods, and the approximation of transfer operators of dynamical systems. The results show that the tensor-train format enables us to compute low-rank approximations for various numerical problems and to reduce the memory consumption and the computational costs significantly compared to classical approaches. We demonstrate that tensor decompositions are a powerful tool for solving high-dimensional problems from various application areas.

    In recent years, tensor decompositions have become an important tool both for the mathematical modeling of high-dimensional systems and for the approximation of high-dimensional functions. Tensor-based methods are already being applied successfully in a wide variety of fields. We regard tensors as a generalization of matrices carrying multiple indices. The number of elements of such a tensor, and hence its storage requirement, grows exponentially with the number of dimensions; this phenomenon is known as the curse of dimensionality. Interest in tensor decompositions is growing steadily, since recently developed tensor formats have shown that it is possible to circumvent this curse and to study high-dimensional systems that previously could not be treated with conventional numerical methods. Typical applications include the solution of systems of linear equations, eigenvalue problems, and ordinary as well as partial differential equations. The methods presented here comprise the tensor-based representation of Markovian master equations, the tensor decomposition of linear operators with nearest-neighbor interactions, the tensor-based extension of dynamic mode decomposition, and the approximation of the Perron–Frobenius operator. Throughout, we focus on the so-called tensor-train format. Our experiments show that this representation allows us to compute accurate approximations of the solutions of systems of linear equations and eigenvalue problems, for example in order to determine stationary probability distributions. Compared to classical methods, the computational effort and the associated run time can be reduced significantly. We are thus able to gain insights into the dynamics and structures of high-dimensional systems. In our view, the methods presented here constitute a further contribution to the range of applications of tensor decompositions.
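
    Since the tensor-train format is the thread running through the thesis, a compact reference implementation of the standard TT-SVD algorithm may help fix ideas. This is a generic textbook construction in plain NumPy, with a simple relative singular-value threshold as the truncation rule, not code from the thesis.

        import numpy as np

        def tt_svd(tensor, eps=1e-10):
            # Decompose a full tensor into TT cores by sequential truncated SVDs.
            shape = tensor.shape
            cores, r_prev, C = [], 1, tensor
            for k in range(len(shape) - 1):
                C = C.reshape(r_prev * shape[k], -1)
                U, s, Vt = np.linalg.svd(C, full_matrices=False)
                r = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
                cores.append(U[:, :r].reshape(r_prev, shape[k], r))
                C = s[:r, None] * Vt[:r]
                r_prev = r
            cores.append(C.reshape(r_prev, shape[-1], 1))
            return cores

        def tt_to_full(cores):
            # Contract the cores back into a full tensor for verification.
            T = cores[0]
            for core in cores[1:]:
                T = np.tensordot(T, core, axes=([-1], [0]))
            return T.reshape([c.shape[1] for c in cores])

        T = np.random.default_rng(6).normal(size=(3, 4, 5, 6))
        cores = tt_svd(T)
        assert np.allclose(T, tt_to_full(cores))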